Avoiding communication in primal and dual block coordinate descent methods
Authors
Abstract
Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular for analyzing large machine learning datasets. However, existing implementations communicate at every iteration, a cost which, on modern data center and supercomputing architectures, often dominates the cost of floating-point computation. Recent results on communication-avoiding Krylov subspace methods suggest that large speedups are possible by reorganizing iterative algorithms to avoid communication. We show how applying similar algorithmic transformations leads to primal and dual block coordinate descent methods for the regularized least-squares problem that communicate only every s iterations, where s is a tuning parameter, instead of every iteration. We derive communication-avoiding variants of the primal and dual block coordinate descent methods that reduce the number of synchronizations by a factor of s on distributed-memory parallel machines without altering the convergence rate. Our communication-avoiding algorithms attain modeled strong scaling speedups of 14× and 165× on a modern supercomputer using MPI and Apache Spark, respectively, and modeled weak scaling speedups of 12× and 396× on the same machine using MPI and Apache Spark, respectively.
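To make the reorganization concrete, below is a minimal NumPy sketch of the s-step idea for primal block coordinate descent on ridge regression. It is an illustration under stated assumptions, not the paper's implementation: all names (ca_bcd, block_size, s, n_outer) are hypothetical, and the comments mark where the single per-s-iterations communication round (e.g., an MPI all-reduce) would occur in a distributed run.

```python
import numpy as np

def ca_bcd(A, b, lam, block_size, s, n_outer, rng):
    """Sketch of communication-avoiding block coordinate descent for
    (1/2)||Ax - b||^2 + (lam/2)||x||^2. Hypothetical helper, not the
    paper's code."""
    m, n = A.shape
    assert n % block_size == 0 and s <= n // block_size
    x = np.zeros(n)
    r = b - A @ x                        # residual (conceptually distributed)
    for _ in range(n_outer):
        # Choose the next s blocks up front.
        picks = rng.choice(n // block_size, size=s, replace=False)
        cols = np.concatenate([np.arange(j * block_size, (j + 1) * block_size)
                               for j in picks])
        # ONE communication round: Gram blocks and residual products for
        # all s blocks at once (an all-reduce in an MPI implementation).
        G = A[:, cols].T @ A[:, cols]
        g = A[:, cols].T @ r
        # s exact block minimizations using only the locally stored G, g.
        for i in range(s):
            Ji = slice(i * block_size, (i + 1) * block_size)
            xi = x[cols[Ji]]
            lhs = G[Ji, Ji] + lam * np.eye(block_size)
            xi_new = np.linalg.solve(lhs, g[Ji] + G[Ji, Ji] @ xi)
            g -= G[:, Ji] @ (xi_new - xi)    # keep g = A_cols^T r current
            x[cols[Ji]] = xi_new
        r = b - A @ x                        # refreshed in the next round

    return x

# Tiny usage example on random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 64))
b = rng.standard_normal(200)
x = ca_bcd(A, b, lam=0.1, block_size=8, s=4, n_outer=25, rng=rng)
```

The point the sketch mirrors is that the Gram blocks G and the products A_J^T r for all s chosen blocks are formed in one round; the s subsequent block solves use only locally stored quantities, so synchronization frequency drops by a factor of s while the sequence of iterates (and hence the convergence rate) is unchanged.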
Similar resources
Stochastic Parallel Block Coordinate Descent for Large-Scale Saddle Point Problems
We consider convex-concave saddle point problems with a separable structure and non-strongly convex functions. We propose an efficient stochastic block coordinate descent method using adaptive primal-dual updates, which enables flexible parallel optimization for large-scale problems. Our method shares the efficiency and flexibility of block coordinate descent methods with the simplicity of prim...
Primal-Dual methods for sparse constrained matrix completion
We develop scalable algorithms for regular and non-negative matrix completion. In particular, we base the methods on trace-norm regularization that induces a low rank predicted matrix. The regularization problem is solved via a constraint generation method that explicitly maintains a sparse dual and the corresponding low rank primal solution. We provide a new dual block coordinate descent algor...
Adaptive Stochastic Primal-Dual Coordinate Descent for Separable Saddle Point Problems
We consider a generic convex-concave saddle point problem with a separable structure, a form that covers a wide range of machine learning applications. Under this problem structure, we follow the framework of primal-dual updates for saddle point problems, and incorporate stochastic block coordinate descent with adaptive stepsizes into this framework. We theoretically show that our proposal of ada...
Block-proximal methods with spatially adapted acceleration
We study and develop (stochastic) primal-dual block-coordinate descent methods based on the method of Chambolle and Pock. Our methods have known convergence rates for the iterates and the ergodic gap: O(1/N^2) if each block is strongly convex, O(1/N) if no strong convexity is present, and more generally a mixed rate O(1/N^2) + O(1/N) for strongly convex blocks, if only some blocks are strongly ...
Smooth Primal-Dual Coordinate Descent Algorithms for Nonsmooth Convex Optimization
We propose a new randomized coordinate descent method for a convex optimization template with broad applications. Our analysis relies on a novel combination of four ideas applied to the primal-dual gap function: smoothing, acceleration, homotopy, and coordinate descent with non-uniform sampling. As a result, our method features the first convergence rate guarantees among the coordinate descent ...
Journal: CoRR
Volume: abs/1612.04003
Pages: -
Publication date: 2016